# bfloat16 optimization
GPT-2 774M FineWeb 150B
MIT
This model originates from karpathy's llm.c project and was converted to Hugging Face format to support research into bfloat16 performance. Training consumed 150 billion tokens of FineWeb data.
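For context on what bfloat16 trades away: it keeps float32's full 8-bit exponent range but truncates the mantissa from 23 bits to 7, so values round much more coarsely. As a rough illustration (not part of the model card, and the `to_bfloat16` helper is my own), a minimal Python sketch of rounding a float to the nearest bfloat16-representable value:

```python
import struct


def to_bfloat16(x: float) -> float:
    """Round x to the nearest bfloat16-representable value (ties to even).

    bfloat16 is the top 16 bits of a float32: 1 sign bit,
    8 exponent bits, and 7 explicit mantissa bits.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # float32 bit pattern
    upper = bits >> 16       # the 16 bits bfloat16 keeps
    lower = bits & 0xFFFF    # the 16 mantissa bits it drops
    # Round to nearest, ties to even on the kept LSB.
    if lower > 0x8000 or (lower == 0x8000 and (upper & 1)):
        upper += 1
    return struct.unpack(">f", struct.pack(">I", upper << 16))[0]


# pi survives only to ~2-3 decimal digits in bfloat16
print(to_bfloat16(3.14159265))  # 3.140625
```

The coarse mantissa is why bfloat16 training runs like this one are interesting to study: the format matches float32's dynamic range (so gradients rarely overflow or underflow), while halving memory and bandwidth at the cost of precision.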
Large Language Model
Transformers
rhysjones
© 2025
AIbase